Patent abstract:
The present invention relates to a data input/output device. Conventionally, the processors in charge of arithmetic are serially connected to an external memory, directly access data in the external memory, and transfer the results back to it; accessing the data takes more time than the arithmetic processing itself, and the larger the amount of data to be processed, the more time the access consumes. To solve this conventional problem, the present invention provides a device comprising a plurality of parallel processors that process the transmitted data and access the data of other parallel processors through a crossbar network, and a master processor that outputs data input from an external memory to the parallel processors, prioritizes the data input from the plurality of parallel processors, and transmits it to the external memory. Data to be processed is stored in advance in buffer memory and processed in pipeline fashion, thereby greatly reducing the overall processing time.
Publication number: KR20020040490A
Application number: KR1020000070549
Filing date: 2000-11-24
Publication date: 2002-05-30
Inventor: 송정호
Applicant: 구자홍; 엘지전자주식회사
IPC main class:
Patent description:

DATA INPUT/OUTPUT APPARATUS
[8] BACKGROUND OF THE INVENTION 1. Field of the Invention. The present invention relates to a data input/output device, and more particularly to a data input/output device that uses a triple buffer memory to realize a pipeline function: the load is leveled between a master processor performing data input/output and the parallel processors in charge of data processing, so that data processing in the parallel processors proceeds during the master processor's data input/output.
[9] In the related art, as illustrated in FIG. 1, the processors 10 in charge of arithmetic are serially connected to the external memory 20, directly access data in the external memory 20, and transfer the results back to the external memory 20.
[10] As a result, accessing the data takes more time than the arithmetic processing itself, and the more data there is to process, the more time the access consumes.
[11] Accordingly, the present invention has been made to solve the above-mentioned conventional problem, and its object is to provide an apparatus that uses a triple buffer memory so that the data processing of the parallel processors is performed simultaneously with the data input/output of the master processor.
[1] FIG. 1 is a schematic view showing the configuration of a conventional data input/output device.
[2] FIG. 2 is an exemplary view showing the overall configuration of the data input/output device of the present invention.
[3] FIG. 3 is an exemplary view showing the structure of a buffer memory applied to the present invention.
[4] FIG. 4 is an exemplary view showing the occupancy of the buffer memory in the present invention.
[5] ********** Explanation of symbols for the main parts of the drawing **********
[6] 10: processor 20: external memory
[7] 30: master processor 40: parallel processor
[12] To achieve the above object, the data input/output device of the present invention comprises: a plurality of parallel processors for processing the transmitted data and accessing the data of other parallel processors through a crossbar network; and a master processor for outputting data input from an external memory to the parallel processors, determining priorities of the data input from the plurality of parallel processors, and transmitting the data to the external memory.
[13] The parallel processor is characterized by further comprising a plurality of buffer memories that store the input data, the data-processing algorithm, and the output data of the parallel processor in triplicate.
[14] The parallel processor is characterized in that 32-bit, 16-bit, and 8-bit accesses are all possible in one clock cycle through the crossbar network.
[15] The master processor stores the data to be processed in a predetermined area of each buffer memory in advance while each parallel processor is processing data, thereby performing pipeline processing without collision.
[16] Hereinafter, an embodiment of the present invention will be described in detail with reference to the accompanying drawings.
[17] FIG. 2 is an exemplary view showing the configuration of the data input/output device of the present invention. As shown therein, the device comprises a plurality of parallel processors 40 that process the transmitted data and access the data of other parallel processors through a crossbar network, and a master processor 30 that outputs data input from the external memory 20 to the parallel processors 40, determines priorities of the data input from the plurality of parallel processors 40, and transmits the data to the external memory 20.
[18] The operation of the embodiment of the present invention configured as described above will now be described with reference to the accompanying drawings.
[19] As shown in FIG. 2, the master processor 30 transmits data input from the external memory 20 to each parallel processor 40 through the crossbar network. Each parallel processor 40 contains a built-in buffer memory (not shown), structured as in FIG. 3, that stores the input data, the data-processing algorithm, and the output data in triplicate. While a complex operation is being performed inside a parallel processor, the master processor 30 uses the external bus to transfer the next data to be processed into the input-data area of that buffer memory in advance, completing the access to the external memory 20 during the data processing. As shown in FIG. 4, the master processor and the parallel processors therefore pipeline their tasks continuously without colliding with each other.
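The overlap of memory access and computation described above can be sketched as a small software simulation. This is only an illustration under assumed names (the patent describes hardware buffers, not this code), simplified to the input side of the buffering: while one buffer slot is being processed, the master pre-loads the next block into the other slot.

```python
def run_pipeline(blocks, process):
    """Process a stream of data blocks while the next block is pre-loaded.

    blocks  -- iterable of data blocks arriving from external memory
    process -- the parallel processor's algorithm for one block
    """
    buffers = [None, None]     # two rotating input slots (simplified from three)
    fill, work = 0, 1          # slot being pre-loaded vs. slot being processed
    stream = iter(blocks)
    buffers[work] = next(stream, None)   # master pre-loads the first block
    results = []
    while buffers[work] is not None:
        buffers[fill] = next(stream, None)       # master fetches the NEXT block
        results.append(process(buffers[work]))   # while this block is processed
        fill, work = work, fill                  # rotate the buffer roles
    return results
```

For example, `run_pipeline(range(4), lambda x: 2 * x)` returns `[0, 2, 4, 6]`; in each loop iteration the fetch of block *n+1* and the processing of block *n* stand in for the overlapped external-memory access and parallel-processor computation.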
[20] In addition, the master processor 30 determines the priority of the data input from each parallel processor 40 and transmits the data to the external memory 20 sequentially in that order.
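The write-back arbitration just described can be illustrated with a minimal sketch. The priority values, the tuple layout, and the function name are assumptions for illustration only; the patent does not specify the arbitration scheme beyond determining priorities and transmitting sequentially.

```python
import heapq

def arbitrate_outputs(pending):
    """Order result blocks from the parallel processors for write-back.

    pending -- list of (priority, processor_id, data) tuples; a lower
               priority value is transmitted first, processor id breaks ties.
    Returns the data blocks in transmission order.
    """
    heapq.heapify(pending)          # min-heap keyed on (priority, processor_id)
    order = []
    while pending:
        _priority, _pid, data = heapq.heappop(pending)
        order.append(data)
    return order
```

With results of priority 2, 1, and 3 pending from three processors, the priority-1 block is transmitted to external memory first.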
[21] In addition, each parallel processor 40 can also access the buffer memories of the other parallel processors through the crossbar network, and 8-bit, 16-bit, and 32-bit accesses are each possible in one clock cycle over the crossbar network.
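As a rough software illustration of the three access widths (the field layout and function name are assumptions; the actual crossbar is hardware and performs these accesses in a single cycle):

```python
def read_access(word32, size_bits, byte_offset=0):
    """Extract an 8-, 16-, or 32-bit field from a 32-bit buffer word.

    size_bits   -- one of the three access widths the crossbar supports
    byte_offset -- byte position of the field within the 32-bit word
    """
    masks = {8: 0xFF, 16: 0xFFFF, 32: 0xFFFFFFFF}
    if size_bits not in masks:
        raise ValueError("crossbar supports 8-, 16-, and 32-bit access only")
    return (word32 >> (byte_offset * 8)) & masks[size_bits]
```

For instance, reading 16 bits at offset 0 from the word `0x12345678` yields `0x5678`, while a full 32-bit read returns the whole word.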
[22] As described above, the data input/output device of the present invention stores data to be processed in advance using a triple buffer memory, thereby greatly reducing the data processing time through the pipeline function.
Claims:
Claims (4)
[1" claim-type="Currently amended] A plurality of parallel processors for processing the transmitted data and accessing data of other parallel processors through a crossover network; And a master processor configured to output data input from an external memory to the parallel processor, and determine a priority of data input from the plurality of parallel processors and transmit the data to an external memory.
[2" claim-type="Currently amended] The data input and output device according to claim 1, wherein the parallel processor further comprises a plurality of buffer memories for storing input data, algorithms for data processing, and output data of the parallel processor in triple.
[3" claim-type="Currently amended] The data input and output device according to claim 1, wherein the parallel processor enables all 32-bit, 16-bit, and 8-bit accesses in one clock cycle through a crossover network.
[4" claim-type="Currently amended] The data input of claim 1, wherein the master processor stores data to be processed in a predetermined area of each buffer memory in advance while each parallel processor processes the data, thereby performing pipeline processing without collision. Output device.
Similar technologies:
Publication number | Publication date | Patent title
JP5859017B2|2016-02-10|Control node for processing cluster
US6038584A|2000-03-14|Synchronized MIMD multi-processing system and method of operation
EP0428770B1|1995-02-01|Data controlled array processor
US6804815B1|2004-10-12|Sequence control mechanism for enabling out of order context processing
JP3454808B2|2003-10-06|Computer processing system and computer-implemented processing method
JP4386636B2|2009-12-16|processor architecture
US5339447A|1994-08-16|Ones counting circuit, utilizing a matrix of interconnected half-adders, for counting the number of ones in a binary string of image data
US5613146A|1997-03-18|Reconfigurable SIMD/MIMD processor using switch matrix to allow access to a parameter memory by any of the plurality of processors
US7146486B1|2006-12-05|SIMD processor with scalar arithmetic logic units
DE2819571C2|1987-05-27|
US9280297B1|2016-03-08|Transactional memory that supports a put with low priority ring command
EP0458304B1|1997-10-08|Direct memory access transfer controller and use
EP0429733B1|1999-04-28|Multiprocessor with crossbar between processors and memories
US7474670B2|2009-01-06|Method and system for allocating bandwidth
US5426612A|1995-06-20|First-in first-out semiconductor memory device
US4646232A|1987-02-24|Microprocessor with integrated CPU, RAM, timer, bus arbiter data for communication system
CA1324835C|1993-11-30|Modular crossbar interconnection network for data transaction between system units in a multi-processor system
JP2688320B2|1997-12-10|Integrated circuit chip
US4969120A|1990-11-06|Data processing system for time shared access to a time slotted bus
US7313641B2|2007-12-25|Inter-processor communication system for communication between processors
US6516369B1|2003-02-04|Fair and high speed arbitration system based on rotative and weighted priority monitoring
US6895457B2|2005-05-17|Bus interface with a first-in-first-out memory
EP0154051B1|1991-03-20|Integrated and programmable processor for word-wise digital signal processing
EP0208870B1|1990-12-05|Vector data processor
US8364873B2|2013-01-29|Data transmission system and a programmable SPI controller
Family patents:
Publication number | Publication date
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title
Legal status:
2000-11-24|Application filed by 구자홍, 엘지전자주식회사
2000-11-24|Priority to KR1020000070549A
2002-05-30|Publication of KR20020040490A
Priority:
Application number | Filing date | Patent title
KR1020000070549A|KR20020040490A|2000-11-24|2000-11-24|Data input/output apparatus|